auditory system
Listen to the Waves: Using a Neuronal Model of the Human Auditory System to Predict Ocean Waves
Matysiak, Artur, Roeber, Volker, Kalisch, Henrik, König, Reinhard, May, Patrick J. C.
Artificial neural networks (ANNs) have evolved from primitive 1940s models of brain function into tools for artificial intelligence. They comprise many units, artificial neurons, interlinked through weighted connections. ANNs are trained to perform tasks through learning rules that modify the connection weights. With these rules the focus of research, ANNs have developed into a branch of machine learning that advances independently of neuroscience. Although likely required for the development of truly intelligent machines, the integration of neuroscience into ANNs has remained a neglected proposition. Here, we demonstrate that designing an ANN along biological principles results in drastically improved task performance. As a challenging real-world problem, we choose real-time ocean-wave prediction, which is essential for various maritime operations. Motivated by the similarity of ocean waves measured at a single location to sound waves arriving at the eardrum, we redesign an echo state network to resemble the brain's auditory system. This yields a powerful predictive tool that is computationally lean, robust with respect to network parameters, and works efficiently across a wide range of sea states. Our results demonstrate the advantages of integrating neuroscience with machine learning and offer a tool for use in the production of green energy from ocean waves.
- North America > United States > New Jersey > Bergen County > Mahwah (0.04)
- North America > United States > Hawaii > Honolulu County > Honolulu (0.04)
- Europe > United Kingdom > England > Cambridgeshire > Cambridge (0.04)
- (4 more...)
- Health & Medicine > Therapeutic Area > Neurology (1.00)
- Energy > Renewable > Ocean Energy (1.00)
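To make the echo state network (ESN) approach above concrete, here is a minimal sketch of a generic leaky ESN with a ridge-regression readout, applied to a toy oscillatory signal standing in for a wave-elevation record. The reservoir size, spectral radius, leak rate, and the signal itself are illustrative assumptions; the paper's auditory-system-inspired redesign is not reproduced here.

```python
import numpy as np

# Minimal leaky echo state network with a ridge-regression readout.
# All hyperparameters and the toy signal are illustrative assumptions,
# not values from the paper.
rng = np.random.default_rng(0)
n_res, leak, rho = 300, 0.3, 0.9

W_in = rng.uniform(-0.5, 0.5, (n_res, 1))
W = rng.uniform(-0.5, 0.5, (n_res, n_res))
W *= rho / max(abs(np.linalg.eigvals(W)))      # scale to spectral radius rho

def run_reservoir(u):
    """Drive the reservoir with the scalar input sequence u; collect states."""
    x = np.zeros(n_res)
    states = []
    for u_t in u:
        x = (1 - leak) * x + leak * np.tanh(W_in[:, 0] * u_t + W @ x)
        states.append(x.copy())
    return np.array(states)

# Toy two-component swell standing in for a measured wave-elevation record.
t = np.linspace(0, 60, 3000)
signal = np.sin(0.9 * t) + 0.4 * np.sin(2.3 * t + 1.0)

X = run_reservoir(signal[:-1])                 # states up to time T-1
y = signal[1:]                                 # target: the next sample
W_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(n_res), X.T @ y)
print("train RMSE:", np.sqrt(np.mean((X @ W_out - y) ** 2)))
```

Only the linear readout `W_out` is trained, which is what keeps ESNs computationally lean compared with fully trained recurrent networks.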
Parameterising Feature Sensitive Cell Formation in Linsker Networks in the Auditory System
This paper examines and extends the work of Linsker (1986) on self-organising feature detectors. Linsker concentrates on the visual processing system, but infers that the weak assumptions made will allow the model to be used in the processing of other sensory information. This claim is examined here, with special attention paid to the auditory system, where there is much lower connectivity and therefore more statistical variability. On-line training is utilised, to obtain an idea of training times. These are then compared to the time available to pre-natal mammals for the formation of feature-sensitive cells.
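As a rough illustration of the kind of Hebbian feature formation the paper studies, the sketch below trains a single linear unit on-line on spatially correlated inputs with bounded weights, loosely in the spirit of Linsker's rule; the connectivity, correlation length, and learning parameters are invented for the example, not taken from the paper.

```python
import numpy as np

# On-line Hebbian development of a feature-sensitive cell, loosely in the
# spirit of Linsker (1986). All parameter values here are illustrative.
rng = np.random.default_rng(1)
n_in, eta, w_max = 64, 0.01, 0.5                # low connectivity, as in audition

# Spatially correlated inputs: nearby input cells tend to fire together.
pos = np.arange(n_in)
C = np.exp(-((pos[:, None] - pos[None, :]) ** 2) / (2 * 8.0 ** 2))
L = np.linalg.cholesky(C + 1e-9 * np.eye(n_in))

w = rng.uniform(-0.1, 0.1, n_in)
for step in range(5000):                        # on-line training
    x = L @ rng.standard_normal(n_in)           # one correlated input sample
    y = w @ x                                   # linear output unit
    w = np.clip(w + eta * y * x, -w_max, w_max) # bounded Hebbian update

# The weight vector drifts toward the dominant eigenvector of C:
# a smooth, spatially tuned receptive field.
print(np.round(w, 2))
```

With fewer input connections, each sample is noisier relative to the correlation structure, which is why lower connectivity implies more statistical variability and longer training times.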
Reinforcement Learning Predicts the Site of Plasticity for Auditory Remapping in the Barn Owl
The auditory system of the barn owl contains several spatial maps. In young barn owls raised with optical prisms over their eyes, these auditory maps are shifted to stay in register with the visual map, suggesting that the visual input imposes a frame of reference on the auditory maps. However, the optic tectum, the first site of convergence of visual with auditory information, is not the site of plasticity for the shift of the auditory maps; the plasticity occurs instead in the inferior colliculus, which contains an auditory map and projects into the optic tectum. We explored a model of the owl remapping driven by a global reinforcement signal whose delivery is controlled by visual foveation. A Hebbian learning rule gated by reinforcement learned to appropriately adjust the auditory maps. In addition, reinforcement learning preferentially adjusted the weights in the inferior colliculus, as in the owl brain, even though the weights were allowed to change throughout the auditory system.
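A toy version of the reinforcement-gated Hebbian rule described above might look like the following: a noisy linear map orients to sound sources, a global reward signal reports whether the response foveates the prism-shifted visual target, and the Hebb update is multiplied by that reward. The map size, shift, noise level, and reward values are invented for the example.

```python
import numpy as np

# Toy reinforcement-gated Hebbian remapping. Map size, prism shift,
# exploration noise, and reward values are illustrative assumptions.
rng = np.random.default_rng(2)
n, shift, eta = 40, 5, 0.05
W = np.eye(n)                                # initially aligned auditory map

for trial in range(50000):
    s = rng.integers(n)                      # true sound-source position
    x = np.zeros(n); x[s] = 1.0              # auditory input activity
    y = W @ x + 0.5 * rng.random(n)          # noisy map output (exploration)
    a = int(np.argmax(y))                    # orienting response
    target = (s + shift) % n                 # visually correct, prism-shifted
    r = 1.0 if a == target else -0.05        # global reinforcement (foveation)
    W = np.clip(W + eta * r * np.outer(y, x), 0.0, 1.0)  # gated Hebb rule

# For this toy setup the map's peak response drifts toward the shift.
print("source 0 now maps to position:", int(np.argmax(W[:, 0])))
```

The key property the abstract highlights, that a single global reward can localize plasticity to the right site, depends on which weights are exposed to correlated pre- and post-synaptic activity when reward arrives; this sketch shows only the gating mechanism itself.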
An aVLSI Cricket Ear Model
Female crickets can locate males by phonotaxis to the mating song they produce. The behaviour and underlying physiology have been studied in some depth, showing that the cricket auditory system solves this complex problem in a unique manner. We present an analogue very large scale integrated (aVLSI) circuit model of this process and show that results from testing the circuit agree with simulation and with what is known from the behaviour and physiology of the cricket auditory system. The aVLSI circuitry is now being extended for use on a robot, along with previously modelled neural circuitry, to better understand the complete sensorimotor pathway. Understanding how insects carry out complex sensorimotor tasks can help in the design of simple sensory and robotic systems. Insect sensors have often evolved into intricate filters matched to extract highly specific data from the environment, solving a particular problem directly with little or no need for further processing [1].
Early sound exposure in the womb shapes the auditory system
Inside the womb, fetuses can begin to hear some sounds around 20 weeks of gestation. However, the input they are exposed to is limited to low-frequency sounds because of the muffling effect of the amniotic fluid and surrounding tissues. A new MIT-led study suggests that this degraded sensory input is beneficial, and perhaps necessary, for auditory development. Using simple computer models of human auditory processing, the researchers showed that initially limiting input to low-frequency sounds as the models learned to perform certain tasks actually improved their performance. Along with an earlier study from the same team, which showed that early exposure to blurry faces improves computer models' later ability to generalize to new faces, the findings suggest that receiving low-quality sensory input may be key to some aspects of brain development.
- North America > United States > Massachusetts > Middlesex County > Cambridge (0.40)
- Europe > Switzerland > Vaud > Lausanne (0.05)
- Asia > China (0.05)
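A rough sketch of the training curriculum the study describes: pass early training audio through a low-pass filter that mimics attenuation in the womb, then switch to full-bandwidth input. The cutoff frequency and warm-up schedule below are assumptions for illustration, not the study's actual values.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def lowpass(audio, sr, cutoff_hz):
    """Low-pass filter a waveform, mimicking attenuation in the womb."""
    sos = butter(4, cutoff_hz, btype="low", fs=sr, output="sos")
    return sosfilt(sos, audio)

def curriculum_input(audio, sr, epoch, n_warmup=10, cutoff_hz=500.0):
    """Degrade input during the first n_warmup epochs, then pass it through."""
    return lowpass(audio, sr, cutoff_hz) if epoch < n_warmup else audio

# Example: a one-second chirp loses its high frequencies early in training.
sr = 16000
t = np.arange(sr) / sr
chirp = np.sin(2 * np.pi * (200 + 3000 * t) * t)
early = curriculum_input(chirp, sr, epoch=0)   # muffled, womb-like input
late = curriculum_input(chirp, sr, epoch=20)   # full-bandwidth input
```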
Deep Neural Networks Help to Explain Living Brains
In the winter of 2011, Daniel Yamins, a postdoctoral researcher in computational neuroscience at the Massachusetts Institute of Technology, would at times toil past midnight on his machine vision project. He was painstakingly designing a system that could recognize objects in pictures, regardless of variations in size, position and other properties -- something that humans do with ease. The system was a deep neural network, a type of computational device inspired by the neurological wiring of living brains. "I remember very distinctly the time when we found a neural network that actually solved the task," he said. It was 2 a.m., a tad too early to wake up his adviser, James DiCarlo, or other colleagues, so an excited Yamins took a walk in the cold Cambridge air. "I was really pumped," he said. It would have counted as a noteworthy accomplishment in artificial intelligence alone, one of many that would make neural networks the darlings of AI technology over the next few years.
- North America > United States > Massachusetts (0.24)
- Europe > Germany (0.04)
Differences between deep neural networks and human perception
When your mother calls your name, you know it's her voice -- no matter the volume, even over a poor cell phone connection. And when you see her face, you know it's hers -- if she is far away, if the lighting is poor, or if you are on a bad FaceTime call. This robustness to variation is a hallmark of human perception. On the other hand, we are susceptible to illusions: We might fail to distinguish between sounds or images that are, in fact, different. Scientists have explained many of these illusions, but we lack a full understanding of the invariances in our auditory and visual systems.
The promise of AI in audio processing – Towards Data Science
We have seen a rise of AI technologies for image and video processing. Even though things tend to take a little longer to make it to the world of audio, here too we have seen impressive technological advances. In this article, I will summarize some of these advances, outline the further potential of AI in audio processing, and describe some of the possible pitfalls and challenges we might encounter in pursuing this cause. The kicker for my interest in AI use cases for audio processing was the publication of Google DeepMind's "WaveNet", a deep learning model for generating audio recordings [1], released at the end of 2016. Using an adapted network architecture, a dilated convolutional neural network, DeepMind researchers succeeded in generating very convincing text-to-speech and some interesting music-like recordings trained on classical piano recordings.
- Media > Music (0.55)
- Leisure & Entertainment (0.55)
- Health & Medicine > Therapeutic Area (0.48)
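To illustrate the dilated convolutions mentioned above: stacking causal 1-D convolutions whose dilation doubles at each layer makes the receptive field grow exponentially with depth, which is what lets WaveNet-style models see long audio contexts cheaply. The sketch below uses random weights and invented sizes purely for illustration.

```python
import numpy as np

def causal_dilated_conv(x, w, dilation):
    """1-D causal convolution: output at t sees only x[t], x[t-d], x[t-2d], ..."""
    k, pad = len(w), dilation * (len(w) - 1)
    xp = np.concatenate([np.zeros(pad), x])    # left-pad: no future samples leak in
    return sum(w[i] * xp[pad - i * dilation: pad - i * dilation + len(x)]
               for i in range(k))

rng = np.random.default_rng(0)
x = rng.standard_normal(1000)

# Dilations 1, 2, 4, 8 with kernel size 2 give a receptive field of
# 1 + (2 - 1) * (1 + 2 + 4 + 8) = 16 samples despite only 4 layers.
h = x
for d in (1, 2, 4, 8):
    w = rng.standard_normal(2) * 0.5
    h = np.tanh(causal_dilated_conv(h, w, d))  # nonlinearity between layers

print(h.shape)                                 # output length equals input length
```

The causality constraint (no access to future samples) is what makes such a stack usable as an autoregressive generator: each output sample is predicted from past samples only.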